What and How does In-Context Learning Learn? Bayesian Model Averaging, Parameterization, and Generalization
In this paper, we conduct a comprehensive study of In-Context Learning (ICL)
by addressing several open questions: (a) What type of ICL estimator is learned
by large language models? (b) What is a proper performance metric for ICL and
what is the error rate? (c) How does the transformer architecture enable ICL?
To answer these questions, we adopt a Bayesian view and formulate ICL as a
problem of predicting the response corresponding to the current covariate,
given a number of examples drawn from a latent variable model. To answer (a),
we show that, without updating the neural network parameters, ICL implicitly
implements the Bayesian model averaging algorithm, which is proven to be
approximately parameterized by the attention mechanism. For (b), we analyze the
ICL performance from an online learning perspective and establish a
$\mathcal{O}(1/T)$ regret bound for perfectly pretrained ICL, where $T$ is the
number of examples in the prompt. To answer (c), we show that, in addition to
encoding Bayesian model averaging via attention, the transformer architecture
also enables a fine-grained statistical analysis of pretraining under realistic
assumptions. In particular, we prove that the error of the pretrained model is
bounded by the sum of an approximation error and a generalization error, where
the former decays to zero exponentially as the depth grows, and the latter
decays to zero sublinearly with the number of tokens in the pretraining
dataset. Our results provide a unified understanding of the transformer and its
ICL ability with bounds on ICL regret, approximation, and generalization, which
deepens our knowledge of these essential aspects of modern language models.
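To make the claimed attention-to-BMA correspondence concrete, here is a minimal numerical sketch (illustrative only, with hypothetical names; not the paper's construction): softmax attention over the prompt's (covariate, response) pairs returns a weighted average of the responses, with the softmax weights playing the role of the posterior weights in Bayesian model averaging.

```python
import numpy as np

# Illustrative sketch: one softmax-attention readout over T in-context
# examples. The attention weights act like posterior weights, so the
# output is a posterior-weighted (BMA-style) average of the responses.

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_prediction(query, keys, values, scale=1.0):
    """Predict the response for the current covariate `query` by
    attending over the prompt covariates (`keys`) and responses
    (`values`)."""
    scores = keys @ query / scale      # similarity of query to each example
    weights = softmax(scores)          # analogue of posterior model weights
    return weights @ values            # analogue of the BMA prediction

rng = np.random.default_rng(0)
d, T = 4, 8                            # covariate dimension, prompt length
keys = rng.normal(size=(T, d))         # prompt covariates x_1, ..., x_T
values = keys @ rng.normal(size=d)     # responses from one latent linear model
query = rng.normal(size=d)             # current covariate x_{T+1}
print(attention_prediction(query, keys, values, scale=np.sqrt(d)))
```

As the prompt length $T$ grows, the weighted average concentrates on examples consistent with the latent model, which is the intuition behind a regret that shrinks with $T$.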
Learning Regularized Graphon Mean-Field Games with Unknown Graphons
We design and analyze reinforcement learning algorithms for Graphon
Mean-Field Games (GMFGs). In contrast to previous works that require the
precise values of the graphons, we aim to learn the Nash Equilibrium (NE) of
the regularized GMFGs when the graphons are unknown. Our contributions are
threefold. First, we propose the Proximal Policy Optimization for GMFG
(GMFG-PPO) algorithm and show that it converges at a rate of $\tilde{\mathcal{O}}(T^{-1/3})$
after $T$ iterations with an estimation oracle, improving on a previous work by
Xie et al. (ICML, 2021). Second, using kernel embedding of distributions, we
design efficient algorithms to estimate the transition kernels, reward
functions, and graphons from sampled agents. Convergence rates are then derived
when the positions of the agents are either known or unknown. Results for the
combination of the optimization algorithm GMFG-PPO and the estimation algorithm
are then provided. These algorithms are the first specifically designed for
learning graphons from sampled agents. Finally, the efficacy of the proposed
algorithms is corroborated through simulations, which demonstrate that
learning the unknown graphons effectively reduces exploitability.
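As a concrete illustration of the estimation machinery the abstract refers to, the sketch below (an assumed setup, not the paper's algorithm) uses kernel mean embeddings: each empirical distribution of sampled agent positions is embedded into an RKHS via its kernel mean, and two such distributions are compared through the (biased) squared MMD, the RKHS distance between their embeddings.

```python
import numpy as np

# Assumed setup (not the paper's implementation): compare two empirical
# distributions of agent positions through their kernel mean embeddings.

def rbf_kernel(X, Y, bandwidth=1.0):
    """Gaussian RBF kernel matrix k(x, y) = exp(-||x - y||^2 / (2 h^2))."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd2(X, Y, bandwidth=1.0):
    """Biased estimate of the squared MMD between the distributions that
    generated X and Y, i.e. the squared RKHS distance between their
    kernel mean embeddings."""
    return (rbf_kernel(X, X, bandwidth).mean()
            - 2.0 * rbf_kernel(X, Y, bandwidth).mean()
            + rbf_kernel(Y, Y, bandwidth).mean())

rng = np.random.default_rng(0)
agents_a = rng.normal(0.0, 1.0, size=(200, 1))  # sampled agent positions
agents_b = rng.normal(0.5, 1.0, size=(200, 1))  # positions under another policy
print(mmd2(agents_a, agents_b, bandwidth=1.0))  # near zero iff distributions match
```

Embedding distributions this way allows quantities such as transition kernels and reward functions to be estimated from finitely many sampled agents without assuming a parametric form for the population distribution.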